Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@tinoeberl@mastodon.online
2024-03-02 17:18:31

"#Wildfire grows into one of largest in #Texas history as flames menace multiple small towns
A cluster of wildfires scorched the Texas Panhandle on Wednesday, including a blaze that grew into one of the largest in state history, as flames moved with alarming speed and blackened the landsca…

@beaware@social.beaware.live
2024-05-02 20:55:23

@… I feel like this is how it will be. We will all have multiple accounts, and the only ones using it cross-platform will be us, following each other on each account multiple times... because it's opt-in and nobody knows what it is they're opting into... 🤦‍♂️

@arXiv_mathCO_bot@mastoxiv.page
2024-04-03 06:55:39

On a Conjecture Concerning the Roots of Ehrhart Polynomials of Symmetric Edge Polytopes from Complete Multipartite Graphs
Max K\"olbl
arxiv.org/abs/2404.02136

@benb@osintua.eu
2024-02-29 06:45:38

ISW: Kremlin has yet to signal its response following Transnistria's appeal for 'protection': benborges.xyz/2024/02/29/isw-k

@arXiv_quantph_bot@mastoxiv.page
2024-05-02 08:42:52

This arxiv.org/abs/2403.11893 has been replaced.
initial toot: mastoxiv.page/@arXiv_qu…

@arXiv_mathDS_bot@mastoxiv.page
2024-05-01 06:55:55

Bifurcations and explicit unfoldings of grazing loops connecting one high multiplicity tangent point
Zhihao Fang, Xingwu Chen
arxiv.org/abs/2404.19455

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 06:48:59

Do Large Language Models Understand Conversational Implicature -- A case study with a Chinese sitcom
Shisen Yue, Siyuan Song, Xinyuan Cheng, Hai Hu
arxiv.org/abs/2404.19509 arxiv.org/pdf/2404.19509
arXiv:2404.19509v1 Announce Type: new
Abstract: Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn-dialogue-based dataset aimed at conversational implicature, sourced from dialogues in the Chinese sitcom My Own Swordsman. It includes 200 carefully handcrafted questions, all annotated for which Gricean maxims have been violated. We test eight closed-source and open-source LLMs on two tasks: a multiple-choice question task and an implicature explanation task. Our results show that GPT-4 attains human-level accuracy (94%) on the multiple-choice questions. CausalLM follows with 78.5% accuracy. Other models, including GPT-3.5 and several open-source models, show lower accuracies, ranging from 20% to 60%, on the multiple-choice questions. Human raters were asked to rate the explanations of the implicatures generated by the LLMs for reasonability, logic, and fluency. While all models generate largely fluent and self-consistent text, their explanations score low on reasonability except for GPT-4's, suggesting that most LLMs cannot produce satisfactory explanations of the implicatures in the conversation. Moreover, we find that LLMs' performance does not vary significantly by Gricean maxim, suggesting that LLMs do not process implicatures derived from different maxims differently. Our data and code are available at github.com/sjtu-compling/llm-p.
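
A minimal sketch of how the multiple-choice results described above might be scored (overall accuracy plus a breakdown by violated Gricean maxim). The file name and the field names (question, answer, model_choice, maxim) are illustrative assumptions, not the paper's actual schema or released code.

# Hypothetical sketch: score a model's multiple-choice answers on an
# implicature benchmark such as SwordsmanImp. The results file format
# is assumed: a JSON list of dicts with "answer", "model_choice", "maxim".
import json
from collections import defaultdict

def score(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        items = json.load(f)  # list of annotated questions with model answers

    per_maxim = defaultdict(lambda: [0, 0])  # maxim -> [correct, total]
    for item in items:
        correct = item["model_choice"] == item["answer"]
        bucket = per_maxim[item["maxim"]]  # the Gricean maxim violated
        bucket[0] += int(correct)
        bucket[1] += 1

    total_correct = sum(c for c, _ in per_maxim.values())
    total = sum(n for _, n in per_maxim.values())
    print(f"overall accuracy: {total_correct / total:.1%}")
    for maxim, (c, n) in sorted(per_maxim.items()):
        print(f"  {maxim}: {c / n:.1%} ({c}/{n})")

if __name__ == "__main__":
    score("swordsmanimp_results.json")  # hypothetical results file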

@philip@mastodon.mallegolhansen.com
2024-04-30 22:49:49

@… There’s multiple distinct ways in which I miss the “good old days”:
1. The feeling of the web not being “serious”, in the sense that nothing in the world seemed to truly *depend* on the web. You could hang out with friends, turn in homework, do banking (although I wasn’t old enough for it to matter to me yet), all without the web. The web was an *addi…

@benb@osintua.eu
2024-02-27 07:50:37

Military: Russian drone flew across Moldovan border: benborges.xyz/2024/02/27/milit

@arXiv_mathDS_bot@mastoxiv.page
2024-05-02 08:33:15

This arxiv.org/abs/2208.02833 has been replaced.
link: scholar.google.com/scholar?q=a